
Darth Sith
SiN. Corp Sons of Tangra
Posted - 2009.06.10 20:28:00 -
[1]
As an IBM server engineer (when I'm not playing EVE :)), all this talk of 3850 M2s is making me randy :P
Awesome work guys .. I was in a fleet fight 2 days ago .. 146 in local .. no lag at all. It has come a long way from the 30 - 60 sec module lag of the old days!

Darth Sith
SiN. Corp Sons of Tangra
Posted - 2009.06.11 05:10:00 -
[2]
Originally by: Morphisat Do you run the servers in Core mode? It would reduce some overhead, and it's a nice new feature of Server 2008.
Last time I checked, neither SQL Server 2005 nor 2008 was supported by Microsoft on Server Core...

Darth Sith
SiN. Corp Sons of Tangra
Posted - 2009.06.11 05:17:00 -
[3]
Originally by: Entipathy @The Lag People: It will likely not help much, as the fights don't directly hit the database. However, the news that they finished upgrading the SOL nodes will help the lag. The biggest limiters in fleet fights now are client-side rendering and sheer brute-force calculation server-side. The next big step they need to take is to make nodes able to span multiple processor cores.
This might be accomplished using VMware ESX; I'm not sure if CCP has looked into this technology, but it might resolve some of your single-thread issues.
ESX just lets you put a whole lot of OS instances onto one piece of hardware. It has no real benefit for a single-threaded application, other than letting you run many workloads in isolation on the same chunk of hardware. In earlier blogs CCP has already alluded to running many solar systems on a single node, which suggests they are using processor affinity to run multiple single-threaded instances on one OS image, without a hypervisor. While I am a huge fan of ESX, here it would introduce more issues than it is worth: only pure CPU workloads execute with very little overhead, while both disk and network transactions inherit extra latency from going through virtualized adapters.
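For what it's worth, the affinity pattern described above is easy to sketch. This is a hypothetical illustration, not CCP's actual code: `os.sched_setaffinity` is Linux-specific (on Windows, where Tranquility runs, the equivalent is `SetProcessAffinityMask`), and the function name is mine.

```python
import os

def pin_to_core(pid: int, core: int) -> set:
    """Restrict a process (pid 0 = the calling process) to one core,
    so each single-threaded server instance owns a dedicated CPU."""
    os.sched_setaffinity(pid, {core})
    return os.sched_getaffinity(pid)

# Pin the current process to core 0 and read back the resulting mask.
mask = pin_to_core(0, 0)
```

Run one such pinned process per core and you get exactly the "many single-threaded instances on one OS image" layout, with no hypervisor in the path.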

Darth Sith
SiN. Corp Sons of Tangra
Posted - 2009.06.12 00:01:00 -
[4]
Originally by: Entipathy Edited by: Entipathy on 11/06/2009 07:24:12 Network adapters on ESX operate at the same rate as the CPU, and if you are like my company and invest in fibre disk arrays, your bandwidth is almost 2x that of any current server-based technology. As far as single-threaded applications go, ESX allows you to utilize all CPUs as if they are one. So in the case of my setup below, we have 21,000 MHz available, which can be shared by many virtual machines or used by one.
Other features ESX offers are the ability to move virtual machines between hosts on the fly, as well as between datastores; these technologies are called VMotion and Storage VMotion. That would be good for reinforcing nodes, as you could move any other server off the hardware.
I personally manage an ESX environment with 6 IBM 3650s and over 6 terabytes of 4 Gb fibre storage. I'd like to think I know what I am talking about.
This is what one of our servers looks like http://img269.imageshack.us/img269/8706/esxftw.jpg
I don't presume to understand the complexity of Tranquility, but CCP, if you haven't already looked into ESX then I highly suggest you do.
Um... well, if we're whipping our epeens out: I am the senior-most scalable systems / virtualization specialist for IBM in Canada, with one customer alone running 2,700 VMs on 3850 M2s, and I have collectively architected systems for over 10,000 VMs. That said:
"Network adapters on ESX operate at the same rate as the CPU"
Only in the new ESX 4 with VT-d I/O passthrough, where you can allocate a NIC physically to a VM. The downside is that the moment you do that, you can no longer do the fancy stuff like VMotion.
"ESX allows you to utilize all CPUs as if they are one"
Um... that one made me LOL. No: each VM executes on discrete cores. The only way for a VM to execute on multiple cores is to allocate multiple vCPUs to it, and if that is not done correctly on a system with enough cores, it causes scheduler latency issues, because the VM can only execute when enough physical cores are available to service all the vCPUs allocated to it. No, you do not get a dozen cores showing up as one phat huge proc just because the performance counter measures things in available MHz :)
"if you are like my company and invest in fibre disk arrays, your bandwidth is almost 2x that of any current server-based technology"
Actually, Fibre Channel is SCSI-3 encapsulated over a fibre protocol, so you inherit a slight latency crossing the SAN to talk to the disks. Disk for disk, direct attach is actually faster; however, SAN connectivity gives you all the sweet capabilities, like mapping VMFS LUNs to multiple hosts for parallel filesystem access.
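On the bandwidth claim specifically, a quick back-of-the-envelope: 4 Gb Fibre Channel signals at 4.25 Gbaud and uses 8b/10b line coding (10 line bits per data byte), which caps nominal throughput well before any "2x anything" territory.

```python
# 4GFC line rate is 4.25 Gbaud; 8b/10b coding spends 10 bits per data byte.
LINE_RATE_BAUD = 4.25e9
usable_bytes_per_sec = LINE_RATE_BAUD / 10   # one data byte per 10 line bits
usable_MBps = usable_bytes_per_sec / 1e6     # nominal, before framing overhead
```

That works out to 425 MB/s nominal per direction (commonly quoted as ~400 MB/s after framing overhead) — solid, but comparable to, not double, contemporary direct-attach storage.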
Anyone interested in more detail can always fire me a mail in game. Out of respect for this thread, and because I have hijacked enough of it already, this will be my last post here.
Darth